Brain and Language
Elsevier BV
Preprints posted in the last 90 days, ranked by how well they match Brain and Language's content profile, based on 11 papers previously published here. The average preprint has a 0.00% match score for this journal, so anything above that is already an above-average fit.
Kim, J.; Lee, S.; Nam, K.
A central question in the psycholinguistics of visual word recognition is whether morphologically complex words are obligatorily decomposed into stems and affixes during recognition or whether whole-word access can occur when forms are frequent and familiar. The present study investigated how morphological complexity and lexical frequency jointly shape neural responses by leveraging Korean nominal inflection, whose transparent stem-suffix structure permits a clean dissociation between base (stem) frequency and surface (whole-word) frequency. Twenty-five native Korean speakers completed a rapid event-related fMRI lexical decision task involving simple and inflected nouns that varied parametrically in both frequency measures. Representational similarity analysis (RSA) revealed robust encoding of surface frequency, but not base frequency, in the inferior frontal gyrus (IFG) pars opercularis and supramarginal gyrus (SMG), with significantly stronger correlations for inflected than simple nouns. Univariate analyses converged with this result: surface frequency selectively increased activation for inflected nouns in inferior parietal regions, whereas base frequency showed no reliable effects in any ROI. These findings challenge models positing obligatory pre-lexical decomposition, instead supporting accounts in which morphological processing is shaped by post-lexical, usage-driven lexical statistics. Taken together, our findings support a distributed perspective on morphological processing, suggesting that structural and statistical factors jointly constrain access to morphologically complex forms.
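For readers unfamiliar with the method, the representational similarity logic described in the abstract above can be sketched on synthetic data. This is an illustrative toy, not the authors' pipeline: the frequency values and "voxel" patterns below are invented, and Euclidean distance stands in for the correlation-distance RDMs more common in fMRI RSA.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words, n_voxels = 40, 100

# Hypothetical log surface-frequency value per word (invented).
surface_freq = rng.uniform(0, 5, n_words)

# Synthetic "activation patterns" whose first 10 voxels weakly
# encode surface frequency.
patterns = rng.normal(size=(n_words, n_voxels))
patterns[:, :10] += surface_freq[:, None]

# Model RDM: pairwise differences in surface frequency.
model_rdm = pdist(surface_freq[:, None], metric="euclidean")
# Neural RDM: pairwise distances between activation patterns
# (Euclidean for simplicity; correlation distance is also common).
neural_rdm = pdist(patterns, metric="euclidean")

# RSA statistic: rank correlation between the two RDMs.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-neural RDM rank correlation: rho={rho:.3f}, p={p:.3g}")
```

A positive rank correlation here indicates that words with more similar frequencies evoke more similar patterns, i.e., that the region "encodes" the frequency measure.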
Caffarra, S.; Costello, B.; Farina, N.; Dunabeitia, J. A.; Carreiras, M.
The cognitive factors that enable proficient reading vary greatly across individuals. The case of skilled deaf readers is emblematic: it shows that high reading performance can be achieved even when lifelong acoustic experience is absent or minimal. Here we present a set of experiments investigating how alternative strategies of orthographic processing can lead to high levels of reading proficiency. Four EEG studies compared behavioral and brain correlates of orthographic processing in skilled deaf readers and matched hearing controls. Using single word recognition and priming paradigms, we investigated two pillars of orthographic processing: letter identity and letter position. Our findings show that, although both groups read with similar accuracy, skilled deaf readers were faster, and they consistently differed from hearing controls in the way they process letter identity. This group difference was observed in both lexical and sublexical tasks and was specifically related to the identity of orthographic representations, regardless of the visual form of the written stimuli (such as character visual similarity and letter case). These findings uncover alternative strategies that make high reading performance possible, even in the absence of acoustic experience. Public Significance Statement: This research identifies alternative orthographic strategies that improve single-word reading efficiency and can potentially serve as effective compensatory tools when phonological processing is impaired.
Visibelli, E.; Flo, A.; Baraldi, E.; Benavides-Varela, S.
During the first period of life, human infants rapidly and effortlessly acquire the languages they are exposed to. Although memory is central to this process, the nature of early verbal memory systems and the factors that determine retention and forgetting remain largely unknown. Behavioural and brain measures have demonstrated memory formation in newborns. However, word traces fade in the face of acoustic overlap, leading to interference and forgetting. Here, we investigate whether changes in speaker identity facilitate the separation of input into distinct acoustic episodes and the creation of non-overlapping verbal memories. Newborns (0-4 days old) were tested in a familiarization-interference-test protocol while neural cortical activity was recorded using functional near-infrared spectroscopy (fNIRS). The results showed higher neural activation to novel words than to familiar ones during the test phase, indicating that the infants recognized the familiar words despite potentially interfering sounds. The recognition response was measured over the left inferior frontal gyrus (IFG) and superior temporal gyrus (STG), areas known to be crucial for encoding auditory information and language processing. The neural response also included the right IFG and STG, involved in interpreting vocal social cues and speaker recognition. The results indicate that speaker identity is a key feature in the formation of verbal memories from birth, facilitating separability, possibly through early source-content binding (i.e., what-who), a precursor to fully mature episodic memory. Impact Statement: Speaker identity is a distinguishing feature at birth and highlights the episodic nature of humans' first stored verbal memories.
Keshavarzi, M.; Moore, B. C. J.; Goswami, U.
Neural oscillations in the delta (0.5-4 Hz) and theta (4-8 Hz) bands play a key role in tracking the temporal structure of speech. According to Temporal Sampling (TS) theory, dyslexia arises from atypical entrainment of these low-frequency oscillations to speech during infancy and childhood, which is particularly disruptive for phonological encoding. However, studies of adults with dyslexia have rarely examined delta and theta cortical tracking together under naturalistic listening conditions, and delta-band cortical tracking in particular has not been measured. Using EEG, here we focused on delta- and theta-band cortical tracking of continuous natural speech by adults with and without dyslexia, applying a decoding analysis previously used with dyslexic children. Forty-eight English-speaking adults (24 dyslexic, 24 control) listened to a 16-minute continuous spoken narrative while EEG was recorded. Neural decoding of the speech envelope was quantified using backward multivariate Temporal Response Function (mTRF) models applied at two levels: a between-group analysis evaluating group-level differences in neural representation patterns, and a within-participant analysis assessing individual decoding accuracy. Cerebro-acoustic coherence was computed in parallel to provide a complementary measure of neural-speech synchronisation. Additional analyses examined band power, cross-frequency phase-amplitude coupling (PAC), and cross-frequency phase-phase coupling (PPC). Dyslexic adults exhibited less accurate delta- and theta-band decoding in the between-group analysis and reduced theta-band decoding accuracy in the within-participant analysis, alongside reduced coherence in both bands and increased delta-band power, particularly over the right temporal region. No group differences were found for PAC or PPC.
Highlights
- Adults with dyslexia showed reduced delta- and theta-band speech decoding
- Cerebro-acoustic coherence was reduced in delta and theta bands in the dyslexia group
- Delta-band power was increased in dyslexia, especially over the right temporal region
- Cross-frequency coupling did not differ between adults with and without dyslexia
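The backward mTRF analysis named in the abstract above reconstructs the speech envelope from time-lagged EEG. A minimal ridge-regression sketch on synthetic data (illustrative only; the study presumably used a dedicated toolbox, and real analyses score via cross-validation rather than in-sample):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 64                                  # Hz, downsampled EEG rate
n_samples, n_channels = fs * 60, 8       # one minute of data
lags = range(16)                         # 0-250 ms of decoder lags

# Synthetic speech envelope, plus EEG channels that track it at
# random delays under heavy additive noise.
envelope = rng.normal(size=n_samples)
delays = rng.integers(0, 16, n_channels)
eeg = np.stack([np.roll(envelope, d) for d in delays], axis=1)
eeg = eeg + 2.0 * rng.normal(size=eeg.shape)

# Backward-model design matrix: every channel at every lag.
X = np.hstack([np.roll(eeg, -lag, axis=0) for lag in lags])

# Ridge regression: w = (X'X + lambda*I)^-1 X'y.
lam = 1e2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

# Decoding accuracy: correlation between actual and reconstructed
# envelope (in-sample here, for brevity).
recon = X @ w
r = np.corrcoef(envelope, recon)[0, 1]
print(f"envelope reconstruction accuracy: r = {r:.2f}")
```

The reported "decoding accuracy" in such studies is exactly this reconstruction correlation, computed per band and per participant.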
Cosper, S. H.; Bachmann, L.; Sehmer, E.; Steidel, A.; Li, S.-C.
Auditory associative word learning has been demonstrated in infants but proves difficult for young adults, in whom learning succeeds only under specific conditions. To better understand the transition from successful auditory associative word learning in infancy to the difficulty observed in adulthood, we tested 5-6-year-olds and 9-10-year-olds in a sequential associative task, investigating their ability to associate novel pseudowords with environmental sounds. Additionally, we explored short-term episodic recognition memory, language development, sex, and musical training, and their effects on behavioral and electrophysiological measures of word learning. EEG data were collected to assess word learning in an initial training phase (consistent vs. inconsistent pairings) and a subsequent testing phase (matching vs. violated pairings), with additional button-press responses providing behavioral learning data. While learning effects were seen in the first half of the training phase in younger children, no early effects of learning were found in older children. Only musically trained 9-10-year-olds showed word learning in the second half of the training phase. In the testing phase, only non-musically trained 9-10-year-olds revealed trend-level N400-like responses. Short-term memory (auditory-verbal, auditory-nonverbal, and visual-nonverbal) and language development improved with age, but only visual-nonverbal short-term recognition memory was positively correlated with improved auditory associative word learning. Our results, together with earlier findings in infants and young adults, suggest that auditory associative word learning, unlike its cross-modal visual counterpart, remains difficult beyond infancy, from childhood into young adulthood.
Kim, J.; Lee, S.; Nam, K.
A central question in visual word recognition concerns whether orthographic and phonological codes are coordinated sequentially or in parallel during lexical access. Korean Hangul, an alpha-syllabic writing system with morphophonemic spelling principles, allows independent manipulation of orthographic and phonological syllable overlap within a single experimental design. In a masked priming lexical decision task with EEG, we contrasted orthographically identical primes (e.g., -), phonologically overlapping primes (e.g., -), and unrelated primes. Event-related potentials and time-frequency representations (theta: 4-8 Hz, lower beta: 13-20 Hz, upper beta: 20-30 Hz) were analyzed to capture both evoked and oscillatory neural dynamics. Orthographic priming produced a cascade of facilitative effects: early fronto-central P200 enhancement (150-250 ms) with upper beta synchronization (30-290 ms), followed by centro-parietal N400 reduction (350-550 ms) with frontal theta suppression (400-730 ms), and behavioral facilitation. Phonological priming, by contrast, elicited sustained lower beta activity over central regions (310-590 ms) but produced no early electrophysiological modulation and no behavioral facilitation. This spatiotemporal dissociation provides converging neural evidence that orthographic syllable processing emerges at pre-lexical stages and cascades into lexical-level processing, whereas phonological syllable effects are confined to later stages of lexical access. These findings provide support for a sequential or cascaded account of orthographic-phonological coordination, as predicted by dual-route models, while challenging strong forms of parallel activation, and suggest that the alpha-syllabic structure of Korean may enable a processing strategy in which orthographic parsing serves as an efficient entry route to the lexicon.
Vivion, M.; Mathy, F.; Guida, A.; Mondot, L.; Ramanoel, S.
Spatialization in working memory refers to the spatial coding of non-spatial information along a mental horizontal line when encoding verbal material. This phenomenon is thought to support working memory by facilitating order encoding. Although it has been observed for both visually and auditorily presented stimuli, no direct comparison has yet examined whether these modalities rely on similar neural mechanisms. In this study, we investigated whether spatialization in the visual and auditory modalities involves shared or distinct patterns of activity within the working-memory network. Forty-nine participants performed both a visual and an auditory working memory SPoARC task on the same verbal material, allowing us to study the cortical patterns associated with distinct serial positions at both encoding and recognition across sensory modalities. Whole-brain analyses revealed similar frontoparietal networks across conditions. In addition, a representational similarity analysis (RSA) was conducted to assess the similarity of neural patterns between early and late serial positions in a sequence and across sensory modalities. This multivoxel pattern analysis revealed modality-dependent patterns distinguishing early and late positions in the inferior frontal gyrus. Additional modality-specific effects were observed in the anterior intraparietal sulcus in the visual modality and in the posterior hippocampus in the auditory modality. Drawing on the framework proposed by Bottini & Doeller (2020), we propose that order decoding in the IPS might reflect a low-dimensional spatial coding of order (e.g., along a horizontal axis), whereas order decoding in the hippocampus might reflect higher-dimensional spatial or temporal representations.
Allen, S. C.; Koukouvinis, S.; Varjopuro, S. M.; Keitel, A.
Cortical tracking of acoustic features is essential for the neural processing of continuous stimuli such as speech and music. For example, it has been shown that children with dyslexia show atypical cortical tracking. This tracking may therefore reflect a fundamental auditory temporal processing mechanism supporting literacy more generally. In the current pre-registered study, we tested the hypothesis that cortical tracking of speech and music predicts reading ability in healthy young adults (N = 32), evaluated through a lexical decision task. Participants first completed an online session in which they performed a lexical decision task to assess their reading skills. This was followed by an electroencephalography (EEG) session, in which participants listened to a naturalistic short story and a music track. Using mutual information, we showed that neural activity aligned with both speech and music across a wide range of frequencies. Interestingly, cortical tracking was stronger for speech at very low frequencies, while it was stronger for music at higher frequencies. Critically, cortical tracking predicted reaction times in the lexical decision task in a frequency-dependent manner: stronger delta-band tracking (~1-3 Hz) for both speech and music was associated with faster reaction times, whereas stronger alpha-band tracking (~12 Hz) for speech was associated with slower reaction times. These findings remained significant even when controlling for stimulus type, age, musical experience and reading enjoyment. These results suggest that cortical tracking of speech and music reflects a domain-general temporal processing mechanism that is associated with reading ability beyond stimulus-specific features, and beyond development. These findings advance our understanding of the neurobiological underpinnings of literacy and could potentially be leveraged to develop new reading interventions.
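The mutual-information measure of tracking used in the study above can be illustrated with a toy parametric estimator. Everything here is an assumption for illustration: the signals are synthetic, band-filtering is omitted, and the study's actual estimator (likely copula-based) is replaced by the simple Gaussian formula.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

def gaussian_mi(x, y):
    """Mutual information (nats) under a bivariate-Gaussian
    assumption: I = -0.5 * ln(1 - r^2)."""
    r = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log(1.0 - r ** 2)

# Synthetic stimulus envelope and a neural signal that tracks it.
envelope = rng.normal(size=n)
neural = 0.5 * envelope + rng.normal(size=n)

# Shuffling the neural signal destroys the temporal alignment,
# giving an empirical null for the tracking estimate.
control = rng.permutation(neural)

mi_tracked = gaussian_mi(envelope, neural)
mi_null = gaussian_mi(envelope, control)
print(f"MI tracked = {mi_tracked:.3f} nats, shuffled = {mi_null:.4f} nats")
```

In the frequency-resolved version, both signals are band-filtered first and the tracked-vs-null contrast is computed per band, which is what yields the delta-band and alpha-band effects reported above.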
Bahar, N.; Cler, G. J.; Asaridou, S. S.; Smith, H. J.; Willis, H. E.; Healy, M. P.; Chughtai, S.; Haile, M.; Krishnan, S.; Watkins, K. E.
Children with developmental language disorder (DLD) have persistent language learning difficulties and often perform poorly on pseudoword repetition, a task that probes phonological, memory, and speech-motor processes that support vocabulary acquisition. Research on the neural basis of pseudoword repetition in DLD is limited. We used whole-brain functional MRI (fMRI) to examine pseudoword repetition and repetition-based learning in 46 children with DLD (ages 10-15 years) and 71 age-matched children with typical language development. During scanning, children heard and repeated pseudowords paired with visual referents, allowing us to track learning-related changes in neural activity across repetitions. Repeated pseudoword production yielded comparable behavioural learning across groups, with faster productions by later repetitions. Post-scan, form-referent recognition was comparable across groups, whereas pseudoword repetition accuracy was lower in DLD. Pseudoword repetition engaged a distributed neural network, including inferior frontal cortex bilaterally (greater on the left), premotor and sensorimotor cortex, and posterior temporal and occipital regions. Group differences emerged primarily in regions where activity was task negative (i.e., below baseline or deactivated): lateral occipito-parietal cortex (posterior angular gyrus), medial parieto-occipital cortex (retrosplenial), and right posterior cingulate cortex. Learning-related decreases in activity were similar across groups, but region-of-interest analyses showed reduced leftward lateralisation of activity in inferior frontal gyrus in DLD. These findings suggest weaker disengagement of the default mode network during a linguistically demanding task in DLD. Although repetition-based pseudoword learning recruited similar neural mechanisms in both groups, these mechanisms may operate less efficiently in DLD, alongside reduced hemispheric specialisation in inferior frontal cortex. 
Highlights
- Similar repetition-related neural attenuation across groups during pseudoword learning.
- Reduced default-mode network suppression during pseudoword repetition in DLD.
- Reduced left-hemisphere specialisation of inferior frontal cortex in DLD.
- Repetition-based learning in DLD supported by less efficient neural networks.
Humphreys, G. F.; Ralph, M. L.
This study integrates three literatures typically examined in isolation: single-concept semantics, combinatorial semantics, and theory of mind (ToM). We argue that these domains share overlapping computational principles and neuroanatomical networks. Here, we report three major investigations with converging methodological approaches: a meta-analysis of 410 neuroimaging studies, a large omnibus fMRI cross-study comparison (drawing on data from over 150 participants), and two targeted fMRI studies that integrate large language models with traditional psycholinguistic measures, allowing us to quantify combinatorial processing demands and predict brain activation. Across methods, convergent evidence identified a stable, bilateral ATL-STS-TPJ network supporting transmodal combinatorial semantic processing. In addition, the anterior temporal lobe (ATL) showed graded functional specialisation: ventral ATL responded equivalently to single-concept and combinatorial semantics, consistent with a domain-general semantic role, whereas lateral superior ATL was selectively recruited by combinatorial demands. Although ATL-STS-TPJ overlapped with ToM-related activation, targeted control analyses demonstrated that this overlap was eliminated when lexico-syntactic and semantic coherence demands were controlled. Together, these findings support a multimodal combinatorial semantic network centred on bilateral ATL-STS-TPJ with implications for theories of semantic and social cognition. On the basis of these results, we propose a unified theoretical framework of semantic processing.
Kherbawy, N.; Potter, C. E.; Jaffe-Dax, S.
Learning to read leads to widespread changes in brain organization, but it is not yet known when text first becomes a privileged stimulus. To test whether specialized neural responses to text appear prior to reading instruction, 31 monolingual toddlers in Israel (2.1-3.6 years) not yet enrolled in school were presented with displays of real, native text and visually matched non-text symbols. Using functional near-infrared spectroscopy, we found different patterns of activity in response to text vs. non-text across multiple cortical regions. Most notably, text elicited more activity in the ventrolateral prefrontal cortex, a region associated with language processing. These results challenge the view that the reading network emerges in response to gains in reading proficiency and instead suggest that, through implicit sensitivity to regularities in their input, toddlers may be able to discover that text is a meaningful stimulus and begin to develop associations between language and text.
Research Highlights
- Toddlers show different neural responses to real text vs. non-text symbols.
- Unfamiliar symbols evoke a novelty response in multiple cortical regions.
- Text elicits more activity in the left ventrolateral prefrontal cortex, a region associated with processing language.
- Before they know how to read, toddlers may recognize text as a frequent, familiar stimulus that is linked to language.
Ozker, M.; Takashima, A.; Giglio, L.; Hintz, F.; Meyer, A.; Hagoort, P.
Language processing is supported by distributed neural systems. Yet most research examines these systems at the population-average level, obscuring how individual cognitive differences shape language-related brain activity. In this study, we combined comprehensive cognitive assessments and task-based fMRI in a large sample of healthy adults (N = 205) to examine how variability in linguistic knowledge, working memory, processing speed, and non-verbal reasoning influenced neural responses in four language tasks: lexical decision, picture naming, sentence comprehension, and sentence production. All tasks engaged canonical left-lateralized language regions. However, individual differences in cognitive skills were not associated with modulations within commonly activated regions, but rather with modulations in domain-general systems outside traditional perisylvian language areas, mainly the default mode and dorsal attention networks. Notably, activations in these domain-general regions were predominantly negatively correlated with cognitive skills, indicating that individuals with lower cognitive skills draw on these broader neural resources more than higher-skilled individuals, possibly as a compensatory mechanism. These results reveal that while canonical language regions are consistently engaged during language tasks, the recruitment of domain-general systems acts as a variable resource modulated by individuals' cognitive skills. Overall, our findings demonstrate that individual cognitive profiles determine how distributed brain systems are dynamically engaged to scaffold language processing.
Tseng, T.; Thibault, S.; Krzonowski, J.; Canault, M.; Roy, A.; Brozzoli, C.; Boulenger, V.
Speech perception relies on the integration of auditory and articulatory information, yet the precise role of motor regions remains debated. Cross-linguistic approaches and challenging listening situations can help fill this gap. We combined behavioral measures and fMRI with multivariate pattern analyses to investigate cortical representations of native French and non-native Mandarin consonant perception under clear and noisy conditions. Cross-modal classification analysis showed that articulatory features of degraded native labial and dental consonants are mapped somatotopically in right lip and tongue motor areas, regions also activated during consonant production. These representations may support phoneme categorization by compensating for degraded input. Representational similarity analysis further revealed that a network encompassing bilateral temporal and frontal motor regions encodes phonetic features of native and non-native consonants, including place and manner of articulation. Our findings highlight that speech perception relies on embodied sensorimotor representations, which contribute to decoding phonetic features both within and across languages.
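Cross-modal classification of the kind described above trains a decoder on patterns from one condition and tests it on another. A toy nearest-centroid stand-in (all data, effect sizes, and the classifier choice are assumptions for illustration; the study's actual MVPA pipeline is not specified here):

```python
import numpy as np

rng = np.random.default_rng(3)
n_voxels, n_trials = 50, 40

# Hypothetical activation templates for two consonant classes
# (e.g., labial vs. dental); purely synthetic.
templates = rng.normal(size=(2, n_voxels))

def simulate(n, noise):
    """Generate trials as class template plus Gaussian noise."""
    labels = rng.integers(0, 2, n)
    patterns = templates[labels] + noise * rng.normal(size=(n, n_voxels))
    return patterns, labels

train_X, train_y = simulate(n_trials, noise=1.0)  # "production" run
test_X, test_y = simulate(n_trials, noise=2.0)    # noisier "perception" run

# Train: one centroid per class from the production data.
centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in (0, 1)])

# Test: assign each perception trial to its nearest centroid.
dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == test_y).mean()
print(f"cross-modal decoding accuracy: {accuracy:.2f}")
```

Above-chance transfer from training to test condition is the evidence that the two conditions share a common pattern code, which is the logic behind the perception-production mapping reported in the abstract.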
Lallier, M.; Rius-Manau, C.; 23andMe Research Team; Carrion-Castillo, A.
Here, we test the hypothesis that early sustained exposure to complex bilingual environments can positively affect reading development by altering structural interhemispheric connectivity via the corpus callosum (CC). Interhemispheric connectivity has been shown to be inefficient in dyslexia, but also to support compensatory pathways when genetic risk for reading difficulties is present, by enabling the preserved right hemisphere to support a dysfunctional left hemisphere. Mediation models were conducted on children aged 9-10 years (with a 2-year follow-up assessment) from the Adolescent Brain Cognitive Development database (N>10,000). Polygenic scores (PGS) for dyslexia and cognitive performance and continuous bilingualism indices were used as predictors, with reading aloud as the outcome. Bilingualism showed a positive effect on reading partially mediated by the anterior CC, independently of overall brain size. In contrast, genetic predispositions to reading difficulties influenced reading primarily through overall brain size rather than CC connectivity specifically. These two pathways were independent, suggesting that bilingual experience and genetic risk operate through distinct neuroanatomical mechanisms. These findings suggest that recurrent early exposure to complex bilingual environments may shape the brain's structural connectivity toward a more balanced and integrated bilateral frontal organisation. The results highlight potential brain compensatory pathways induced by environmental experiences that may support more efficient reading development and mitigate risks for developmental dyslexia.
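The mediation logic in the study above (bilingualism -> anterior CC -> reading) follows the standard product-of-coefficients scheme, sketched here on synthetic standardized variables. The effect sizes are invented and the study's covariates are omitted; this is the bare arithmetic of the method, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Synthetic standardized variables (all effect sizes invented):
# x = bilingualism index, m = anterior CC connectivity, y = reading.
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)              # a-path
y = 0.3 * m + 0.2 * x + rng.normal(size=n)    # b-path and direct c'

def ols(design, out):
    """OLS coefficients for out ~ intercept + design columns."""
    A = np.column_stack([np.ones(len(out)), design])
    return np.linalg.lstsq(A, out, rcond=None)[0][1:]

a = ols(x, m)[0]                              # x -> m
b, c_prime = ols(np.column_stack([m, x]), y)  # m -> y, and direct x -> y
c_total = ols(x, y)[0]                        # total effect of x on y

indirect = a * b                              # mediated (indirect) effect
print(f"a={a:.3f} b={b:.3f} c'={c_prime:.3f} "
      f"indirect={indirect:.3f} total={c_total:.3f}")
```

For linear OLS models the decomposition is exact: total = direct + indirect. A "partial" mediation, as reported above, corresponds to both the indirect effect and the direct path c' being reliably non-zero.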
Saloranta, E.; Tuulari, J. J.; Pulli, E. P.; Audah, H. K.; Barron, A.; Jolly, A.; Rosberg, A.; Mariani Wigley, I. L. C.; Kurila, K.; Yada, A.; Yli-Savola, A.; Savo, S.; Eskola, E.; Fernandes, M.; Korja, R.; Merisaari, H.; Saukko, E.; Kumpulainen, V.; Copeland, A.; Silver, E.; Karlsson, H.; Karlsson, L.; Mainela-Arnold, E.
Previous studies exploring the connection between early language development and brain anatomy have shown that the cortical areas related to individual differences in language skills are diverse and vary depending on the age of the child. However, due to a lack of large longitudinal samples, the current literature is limited in establishing the extent to which individual differences in language development prior to school age are reflected in the cortex. To fill this gap, we compared gray matter density between participants belonging to different longitudinally defined language profiles, spanning 14 months to five years of age, in a large population-based sample. Participants were 166 children from the FinnBrain Birth Cohort Study who had longitudinal language data from 14 months to five years of age and magnetic resonance imaging data at five years of age. Three groups of language development were used, as in our prior study: persistent low, stable average, and stable high. Voxel-based morphometry metrics were calculated using SPM12, and the three language profile groups were compared to one another. Covariates included sex and age at brain scan. The statistics were thresholded at p < 0.01 and false discovery rate corrected at the cluster level. Of the three longitudinal language profiles, the stable high group had higher gray matter density than the persistent low group in the right superior frontal gyrus. No differences were found between the stable average and stable high groups, nor between the persistent low and stable average groups. The identified superior frontal cortical area belongs to the executive function neural network. This finding adds to the accumulating evidence that individual differences in language development are reflected in the growth of gray matter supporting general processing ability rather than specialized language regions.
The results suggest that cognitive development and early language development are linked through shared principles of neural growth, identifiable already at age five.
Key points
- An association between early language development from 14 months to five years of age and gray matter density differences in the right superior frontal gyrus was found at the age of five years. Children following the strongest language trajectory were more likely to exhibit higher gray matter density in the right superior frontal gyrus than children following the weakest trajectory.
- As the superior frontal gyrus is part of the executive function network, we propose that individual differences in early language development are shaped more by general learning mechanisms supported by those networks than by language-specific pathways.
Lin, K.-Y.; Wolna, A.; Szewczyk, J.; Timmer, K.; Diaz, M.; Wodniecka, Z.
When bilinguals frequently switch between their first (L1) and second (L2) languages during speech production, two phenomena are usually observed: (i) a language switch cost, where switching to a different language is harder than staying in the same one, and (ii) reversed language dominance, where L1 production becomes slower than L2 production. These effects are thought to reflect language control mechanisms, yet the underlying neural bases remain debated. In this study, we addressed this question using precision functional magnetic resonance imaging (fMRI) based on functional localization. Forty-one Polish-English bilinguals performed a language switching task (LST), in which they named pictures in L1 or L2 based on color cues. We investigated the mechanisms behind two indices of language control commonly observed in the LST. First, we asked whether the domain-general resources supporting the language switch cost overlap with those supporting the nonverbal task switch cost. Second, we asked whether reversed language dominance reflects changes in activation within the language-specific system, or whether it is related to increased engagement of domain-general control mechanisms. Results indicated that the language switch cost and the nonverbal task switch cost share overlapping domain-general neural mechanisms. Like the language switch cost, reversed language dominance primarily engages domain-general processes rather than language-specific resources.
Highlights
- fMRI combined with a functional localization approach is used to examine the neural mechanisms underlying language switch cost and reversed language dominance.
- Language switch cost relies on neural mechanisms shared with nonverbal switch cost within the Multiple Demand network.
- Reversed language dominance is primarily supported by domain-general rather than language-specific mechanisms.
- Domain-general neural mechanisms play a pivotal role in bilingual language switching in speech production.
Ceravolo, L.; Thomasson, M.; Constantin, I. M.; Stiennon, E.; Chassot, E.; Pierce, J.; Cionca, A.; Grandjean, D.; Sveikata, L.; Assal, F.; Peron, J.
Emotional prosody processing involves a widespread network of brain regions, but the specific roles of the cerebellum and basal ganglia in explicit and implicit tasks remain poorly understood. This study investigated how the cerebellum and basal ganglia contribute to explicit (emotion categorization) and implicit (gender categorization) processing of emotional prosody, that is, when attention is oriented directly versus only implicitly towards the emotion of the voice stimuli. Twenty-eight healthy French-speaking participants (average age: 65 years) underwent high-resolution functional MRI while performing explicit and implicit vocal emotion processing tasks. Neuroimaging results revealed, and replicated, that both tasks recruited a widespread network, including the superior temporal cortex, inferior frontal cortex, primary motor and somatosensory cortices, basal ganglia, and cerebellum. The explicit task elicited stronger activations in the basal ganglia (caudate nucleus, putamen) and cerebellar regions (Crus I/II, lobules VI, VIIb, and X), consistent with higher cognitive control demands. In contrast, the implicit task was associated with activations in cerebellar lobules IV-V, VI, VIII, and IX, along with the thalamus. Regression-based functional connectivity analyses further demonstrated stronger connectivity between the right cerebellar lobule IX and the putamen, as well as the cerebellar vermis (XII), particularly during implicit processing. These findings highlight the distinct contributions of the cerebellum and basal ganglia to emotional prosody processing, with explicit tasks engaging associative and cognitive control networks, while implicit tasks rely more on sensorimotor and automatic neural processing mechanisms.
Ivanova, E.; Farran, E. K.; Soltanlou, M.
Show abstract
Because early maths skills strongly predict later outcomes, it is crucial to understand the mechanisms that shape early learning in children. Recent years have seen increased study of the neural correlates that support the acquisition of maths skills. However, existing work in early childhood has primarily focused on core number-processing areas in the parietal cortex, with comparatively little attention to the supportive role of prefrontal regions. In this study, we examined the engagement of prefrontal regions when children match numbers and objects. Children (N=60, 25 girls, aged 2.74-5.18 years) matched auditory small (1-3) and large (5-7) numbers, as well as objects (fruits), to corresponding visual pictures while their frontoparietal brain responses were recorded using functional near-infrared spectroscopy (fNIRS). Importantly, matching large numbers was substantially more difficult than matching small numbers or objects. The analysis revealed that children showed increased activation in the right middle frontal gyrus when matching large numbers compared to small numbers. However, there was no difference in the prefrontal region between matching small numbers and objects. The connectivity analysis further revealed increased frontoparietal connectivity when matching small numbers, but not large numbers or objects. Our findings suggest that prefrontal involvement during early numerical knowledge acquisition relies primarily on domain-general mechanisms, with number-specific responses likely to emerge later in development.
Wang, R.; Guo, Q.; Zeng, X.; Leong, C.; Zhang, C.; Zhang, Y.; Abutalebi, J.; Myachykov, A.
Show abstract
Background: The brain's glymphatic system plays a vital role in maintaining neural health. However, little is known about whether second language (L2) immersion can influence this clearance pathway. Methods: Fifty high-proficiency L2 English speakers (mean age: 32.6 years; 78% female) were assessed for glymphatic function using three multimodal MRI markers: BOLD-CSF coupling strength (fMRI), choroid plexus ratio (structural MRI), and DTI-ALPS index (diffusion MRI). Analyses examined relationships between glymphatic markers and L2 immersion duration, age of acquisition (AOA), and active use environment, controlling for age, education, and sex. Results: L2 immersion duration correlated significantly with better glymphatic function. Longer immersion was related to better BOLD-CSF coupling strength (r = -0.315, p < 0.05) and decreased choroid plexus ratios (r = -0.39, p < 0.05), suggesting enhanced brain-CSF coordination and fewer pathological CSF production structures. Mediation analyses demonstrated that immersion influenced the ALPS index indirectly through effects on choroid plexus morphology and BOLD-CSF coupling. L2 AOA moderated the immersion-coupling relationship: individuals who began learning after age 9.53 showed stronger associations between immersion and BOLD-CSF coupling, though AOA did not moderate the choroid plexus effects. Active L2 use in an immersive environment was associated with better glymphatic function, whereas passive immersive exposure and active non-immersive use were not. Conclusions: L2 immersion is associated with better glymphatic system function through multiple pathways, including improved brain-CSF coordination, optimized choroid plexus structure, and increased perivascular flow. These findings provide novel neurobiological evidence that bilingual experience may confer neuroprotective benefits through brain waste clearance mechanisms.
Jimenez-Sanchez, L.; Thye, M.; Richardson, H.
Show abstract
The fusiform face area (FFA) preferentially responds to faces within the first months of life. One hypothesis is that higher-order social responses in middle medial prefrontal cortex (MMPFC) or face responses in the superior temporal sulcus (STS) drive the development of face-selective responses in FFA, with right-hemisphere dominance in FFA eventually arising from lateralised connections to these regions. Another hypothesis proposes that an innate face template in the amygdala guides attention to face-like shapes. This study opportunistically examined the development of the FFA, MMPFC, STS, and amygdala in childhood using an open cross-sectional movie-viewing fMRI dataset with 3-12-year-olds (N=117, M=6.77 years) and adults (N=33, M=24.77 years). We tested for correlations between FFA development and development in MMPFC, STS, and amygdala on the premise that associations between these regions may be observable even in children, and such associations could constrain hypotheses and analytic approaches in future studies with infants. First, we measured functional maturity: how similar each child's response to the movie was to an adult average response timecourse. In all regions, older children's responses were more adult-like. Next, we tested whether FFA maturity correlated with functional connectivity with, or functional maturity of, MMPFC, STS, or amygdala. Children with more mature right FFA responses showed stronger right FFA-right MMPFC connectivity. Children with more mature FFA responses also had more mature STS responses, bilaterally. This study provides preliminary evidence that FFA co-develops with higher-order social brain regions and identifies specific metrics to take forward in future research with infants.
Highlights:
- What drives face-selective responses in FFA is the subject of recent debate.
- 117 children aged 3 to 12 years watched a short movie while undergoing fMRI.
- Right FFA development correlated with functional connectivity to right MMPFC.
- FFA development correlated with STS development, bilaterally.
- FFA co-develops with higher-order social brain regions (controlling for age).